This paper presents a framework for synthesizing bipedal robotic walking that adapts to unknown environments and dynamics errors via a data-driven step-to-step (S2S) dynamics model. We first synthesize an S2S controller that stabilizes walking, using the S2S dynamics of foot placement from the Hybrid Linear Inverted Pendulum (H-LIP) model. A data-driven representation of the robot's S2S dynamics is then learned online via classical adaptive control methods. The desired discrete foot placements are realized on the robot through appropriate continuous output synthesis, which incorporates both the data-driven S2S controller and a low-level tracking controller. The proposed approach is implemented in simulation on the 3D bipedal robot Cassie and demonstrates improved reference velocity tracking. It also realizes walking behaviors that adapt to unknown loads, inaccurate robot models, external disturbance forces, biased velocity estimation, and unknown slopes.
translated by Google Translate
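The online identification of the S2S dynamics can be sketched with recursive least squares (RLS), used here as a hedged stand-in for the classical adaptive law mentioned above; the "true" S2S matrices and all dimensions are illustrative assumptions, not the Cassie model:

```python
import numpy as np

# Learn x_{k+1} = A x_k + B u_k from transition samples via recursive
# least squares -- an illustrative stand-in for the paper's adaptive law.
rng = np.random.default_rng(0)
A_true = np.array([[1.2, 0.3], [0.1, 0.9]])   # assumed "true" S2S matrices
B_true = np.array([[-0.8], [-0.5]])

Theta = np.zeros((3, 2))   # stacked estimate of [A; B]^T
P = 1e6 * np.eye(3)        # RLS covariance (large: uninformative start)

for _ in range(300):
    x = rng.standard_normal((2, 1))           # sampled pre-impact state
    u = rng.standard_normal((1, 1))           # commanded step size
    y = A_true @ x + B_true @ u               # observed next pre-impact state
    phi = np.vstack([x, u])                   # regressor [x; u]
    g = P @ phi / (1.0 + phi.T @ P @ phi)     # RLS gain
    Theta += g @ (y.T - phi.T @ Theta)        # parameter update
    P -= g @ phi.T @ P                        # covariance update

A_hat, B_hat = Theta.T[:, :2], Theta.T[:, 2:]
```

With noise-free, persistently exciting samples the estimate converges to the true matrices; on hardware the same update would run once per step using measured pre-impact states.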
Humans are able to negotiate planned and unplanned downsteps with remarkable agility and ease. The goal of this paper is to systematically study the translation of this human behavior to bipedal walking robots, even if the morphology is inherently different. Concretely, we begin with human data of planned and unplanned downsteps. We analyze this data from the perspective of reduced-order modeling of the human, encoding the center of mass (COM) kinematics and contact forces, which allows us to translate these behaviors into the corresponding reduced-order model of the bipedal robot. We then embed the resulting behaviors into the full-order dynamics of the bipedal robot via a nonlinear-optimization-based controller. The end result is the demonstration of planned and unplanned downsteps in simulation on an underactuated walking robot.
In this paper, we holistically present a Hybrid Linear Inverted Pendulum (H-LIP) based approach for synthesizing and stabilizing 3D underactuated bipedal walking, with an emphasis on thorough hardware realization. The H-LIP is proposed to capture the essential components of the underactuated and actuated parts of robotic walking. The robot's walking gait is then directly synthesized based on the H-LIP. We comprehensively characterize the periodic orbits of the H-LIP and provably derive their stepping stabilization via the step-to-step (S2S) dynamics, which are then used to approximate the S2S dynamics of the horizontal state of the robot's center of mass (COM) during walking. The approximation facilitates an H-LIP based stepping controller that provides desired step sizes to stabilize the robot's walking. By realizing the desired step sizes, the robot achieves dynamic and stable walking. The approach is fully evaluated in both simulation and experiment on the underactuated bipedal robot Cassie, demonstrating dynamic walking behaviors with high versatility and robustness.
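The S2S viewpoint above admits a compact sketch: the pre-impact state of a linear inverted pendulum evolves linearly in the step size, and a deadbeat stepping gain drives the state error to zero in two steps. The COM height and step duration below are assumed values for illustration, not parameters from the paper:

```python
import numpy as np

g0, z0, T = 9.81, 0.8, 0.4         # gravity, COM height, step duration (assumed)
lam = np.sqrt(g0 / z0)
c, s = np.cosh(lam * T), np.sinh(lam * T)

# Pre-impact COM state [position; velocity] relative to the stance foot.
# A step of size u shifts the position at impact, giving linear S2S
# dynamics x_{k+1} = A x_k + B u_k.
A = np.array([[c, s / lam], [lam * s, c]])
B = np.array([[-c], [-lam * s]])

# Deadbeat stepping gain: K = [1, coth(lam*T)/lam] makes A + B K
# nilpotent, so the closed loop reaches the origin in at most two steps.
K = np.array([[1.0, c / (lam * s)]])

x = np.array([[0.15], [0.3]])      # initial pre-impact state error
for _ in range(2):
    u = K @ x                      # step size from the stepping controller
    x = A @ x + B @ u
```

Stabilizing a nonzero walking speed adds only a feedforward step size and orbit state offset to the same feedback law.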
Global position control for underactuated bipedal robots is a challenging problem due to the lack of actuation at the feet. In this paper, we apply Hybrid Linear Inverted Pendulum (H-LIP) based stepping to 3D underactuated bipedal robots for global position control. The step-to-step (S2S) dynamics of H-LIP walking approximate the actual S2S dynamics of the robot's walking, with the step size regarded as the input. An H-LIP based feedback controller thus approximately controls the robot to behave like the H-LIP, with the difference between the two remaining in an error invariant set. Model predictive control (MPC) is applied to the H-LIP for global position control in 3D, and the H-LIP stepping then generates desired step sizes for the robot to track. Moreover, turning behavior is integrated with the step planning. The proposed framework is verified in simulation on the 3D underactuated bipedal robot Cassie, together with proof-of-concept experiments.
Given the increasingly intricate forms of partial differential equations (PDEs) in physics and related fields, computationally solving PDEs without analytic solutions inevitably suffers from the trade-off between accuracy and efficiency. Recent advances in neural operators, a class of mesh-independent neural-network-based PDE solvers, suggest the dawn of overcoming this challenge. In this emerging direction, the Koopman neural operator (KNO) is a representative demonstration and outperforms other state-of-the-art alternatives in terms of accuracy and efficiency. Here we present KoopmanLab, a self-contained and user-friendly PyTorch module of the Koopman neural operator family for solving partial differential equations. Beyond the original version of KNO, we develop multiple new variants of KNO based on different neural network architectures to improve the general applicability of our module. These variants are validated by mesh-independent and long-term prediction experiments implemented on representative PDEs (e.g., the Navier-Stokes equation and the Bateman-Burgers equation) and ERA5 (i.e., one of the largest high-resolution data sets of global-scale climate fields). These demonstrations suggest the potential of KoopmanLab to be considered in diverse applications of partial differential equations.
Temporal sentence grounding (TSG) aims to identify the temporal boundary of a specific segment from an untrimmed video given a sentence query. All existing works first utilize a sparse sampling strategy to extract a fixed number of video frames and then conduct multi-modal interactions with the query sentence for reasoning. However, we argue that these methods have overlooked two indispensable issues: 1) Boundary bias: the annotated target segment generally refers to two specific frames as the corresponding start and end timestamps. The video downsampling process may lose these two frames and take adjacent irrelevant frames as the new boundaries. 2) Reasoning bias: such incorrect new boundary frames also lead to reasoning bias during frame-query interaction, reducing the generalization ability of the model. To alleviate the above limitations, in this paper we propose a novel Siamese Sampling and Reasoning Network (SSRN) for TSG, which introduces a siamese sampling mechanism to generate additional contextual frames to enrich and refine the new boundaries. Specifically, a reasoning strategy is developed to learn the inter-relationship among these frames and generate soft labels on the boundaries for more accurate frame-query reasoning. This mechanism is also able to supplement the absent consecutive visual semantics of the sparsely sampled frames for fine-grained activity understanding. Extensive experiments demonstrate the effectiveness of SSRN on three challenging datasets.
Time-series anomaly detection is an important task that has been widely applied in industry. Since manual data annotation is expensive and inefficient, most applications adopt unsupervised anomaly detection methods, but the results are usually sub-optimal and unsatisfactory to end customers. Weak supervision is a promising paradigm for obtaining a considerable number of labels at low cost: it enables customers to label data by writing heuristic rules rather than annotating each instance individually. However, in the time-series domain it is hard for people to write reasonable labeling functions, as time-series data is numerically continuous and difficult to interpret. In this paper, we propose a Label-Efficient Interactive Time-Series Anomaly Detection (LEIAD) system, which enables a user to improve the results of unsupervised anomaly detection through only a small number of interactions with the system. To achieve this goal, the system integrates weak supervision and active learning collaboratively while generating labeling functions automatically using only a few labeled data points. All of these techniques are complementary and reinforce each other. We conduct experiments on three time-series anomaly detection datasets, demonstrating that the proposed system is superior to existing solutions from both the weak supervision and active learning areas. The system has also been tested in a real industrial scenario to show its practicality.
This paper investigates the use of artificial neural networks (ANNs) to solve differential equations (DEs) and the construction of a loss function that satisfies both the differential equation itself and its initial/boundary conditions. In Section 2, the loss function is generalized to $n^\text{th}$-order ordinary differential equations (ODEs). Other methods of construction are examined in Section 3 and applied to three different models to assess their effectiveness.
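As an illustration of the loss construction (on an assumed toy ODE, not one of the paper's three models), take y' = -y with y(0) = 1: the loss adds the mean squared ODE residual over collocation points to the squared initial-condition violation. Here the candidate function's derivative is taken by finite differences in place of a trained ANN:

```python
import numpy as np

def ode_loss(f, t, y0=1.0):
    """Mean squared residual of y' + y = 0 plus the initial-condition penalty."""
    y = f(t)
    dy = np.gradient(y, t)                   # finite-difference approximation of y'
    residual = dy + y                        # vanishes for a true solution
    return np.mean(residual**2) + (f(np.array([0.0]))[0] - y0) ** 2

t = np.linspace(0.0, 2.0, 401)
loss_exact = ode_loss(lambda s: np.exp(-s), t)   # exact solution: loss near zero
loss_wrong = ode_loss(np.exp, t)                 # wrong candidate: large loss
```

Training an ANN amounts to minimizing this scalar over the network parameters; for an n-th order ODE the residual uses the n-th derivative, with one penalty term per initial/boundary condition.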
Kernels are efficient in representing nonlocal dependence, and they are widely used to design operators between function spaces. Thus, learning kernels in operators from data is an inverse problem of general interest. Due to the nonlocal dependence, the inverse problem can be severely ill-posed, with a data-dependent singular inversion operator. The Bayesian approach overcomes the ill-posedness through a non-degenerate prior. However, a fixed non-degenerate prior leads to a divergent posterior mean when the observation noise becomes small, if the data induces a perturbation in the eigenspace of zero eigenvalues of the inversion operator. We introduce a data-adaptive prior to achieve a stable posterior whose mean always has a small-noise limit. The data-adaptive prior's covariance is the inversion operator with a hyper-parameter selected adaptively to the data by the L-curve method. Furthermore, we provide a detailed analysis of the computational practice of the data-adaptive prior, and demonstrate it on Toeplitz matrices and integral operators. Numerical tests show that a fixed prior can lead to a divergent posterior mean in the presence of any of four types of error: discretization error, model error, partial observation, and a wrong noise assumption. In contrast, the data-adaptive prior always attains posterior means with small-noise limits.
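The divergence described above is easy to reproduce in a two-dimensional toy problem; the Tikhonov form and the hand-picked hyper-parameter below are simplifying stand-ins for the paper's data-adaptive prior and its L-curve selection:

```python
import numpy as np

A = np.diag([1.0, 1e-8])                 # operator with a near-zero singular value
x_true = np.array([1.0, 1.0])
noise = np.array([1e-4, -1e-4])          # small, fixed observation noise
y = A @ x_true + noise

# Plain inversion amplifies the noise in the near-null direction by 1/1e-8.
x_naive = np.linalg.solve(A, y)

# A non-degenerate prior acts like Tikhonov regularization: the ill-posed
# mode is shrunk toward the prior mean instead of exploding.
lam = 1e-4                               # hyper-parameter (hand-picked here)
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(2), A.T @ y)
```

Here the second component of `x_naive` is off by roughly 1e4, while `x_reg` stays bounded, at the cost of not recovering the unobservable mode.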
Deep learning has been widely used for protein engineering. However, it is limited by the lack of sufficient experimental data to train an accurate model for predicting the functional fitness of high-order mutants. Here, we develop SESNet, a supervised deep-learning model that predicts the fitness of protein mutants by leveraging both sequence and structure information and exploiting an attention mechanism. Our model integrates the local evolutionary context from homologous sequences, the global evolutionary context encoding rich semantics from the universal protein sequence space, and the structure information accounting for the microenvironment around each residue in a protein. We show that SESNet outperforms state-of-the-art models in predicting the sequence-function relationship on 26 deep mutational scanning datasets. More importantly, we propose a data augmentation strategy that leverages data from unsupervised models to pre-train our model. After that, our model achieves strikingly high accuracy in predicting the fitness of protein mutants, especially for higher-order variants (> 4 mutation sites), when fine-tuned using only a small number of experimental mutation data points (< 50). The proposed strategy is of great practical value, as the required experimental effort, i.e., producing a few tens of experimental mutation data points for a given protein, is generally affordable for an ordinary biochemistry group and can be applied to almost any protein.